
    The Changing Shape of Legal Information

    As IT, Reference and Instruction librarians, we have experienced significant changes to the shape of legal information over the past five years. The changes concern both the very nature of legal information and how we perceive it. This can be illustrated by our use of the phrase "legal information". Depending on your age and life situation, the words "legal information" will have created specific images in your mind. These changes in perception challenge how we develop our programs of legal research instruction.

    Artequakt: Generating tailored biographies from automatically annotated fragments from the web

    The Artequakt project seeks to automatically generate narrative biographies of artists from knowledge that has been extracted from the Web and maintained in a knowledge base. An overview of the system architecture is presented here and the three key components of that architecture are explained in detail, namely knowledge extraction, information management and biography construction. Conclusions are drawn from the initial experiences of the project and future progress is detailed.
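
    A minimal sketch of how those three components might be chained together; every name below is hypothetical and stands in for Artequakt's actual machinery, not its real API:

        # Hypothetical sketch of the three-stage pipeline described above:
        # knowledge extraction -> information management -> biography
        # construction. All names are illustrative, not Artequakt's own.

        from dataclasses import dataclass, field


        @dataclass
        class Fact:
            """A single extracted statement about an artist."""
            subject: str
            predicate: str
            obj: str
            source_url: str


        @dataclass
        class KnowledgeBase:
            """Information-management layer: stores and serves extracted facts."""
            facts: list[Fact] = field(default_factory=list)

            def add(self, fact: Fact) -> None:
                self.facts.append(fact)

            def about(self, subject: str) -> list[Fact]:
                return [f for f in self.facts if f.subject == subject]


        def extract_facts(page_text: str, url: str) -> list[Fact]:
            """Knowledge-extraction stage (placeholder for the NLP tooling)."""
            # A real system would run entity and relation extraction here.
            return [Fact("Rembrandt", "born_in", "Leiden", url)]


        def build_biography(kb: KnowledgeBase, artist: str) -> str:
            """Biography-construction stage: render stored facts as narrative."""
            lines = [f"{f.subject} {f.predicate.replace('_', ' ')} {f.obj}."
                     for f in kb.about(artist)]
            return " ".join(lines)


        kb = KnowledgeBase()
        for fact in extract_facts("...fetched page text...", "http://example.org/bio"):
            kb.add(fact)
        print(build_biography(kb, "Rembrandt"))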

    Improving Post-Hurricane Katrina Forest Management with MODIS Time Series Products

    Hurricane damage to forests can be severe, causing millions of dollars of timber damage and loss. To help mitigate loss, state agencies require information on the location, intensity, and extent of damaged forests. NASA's MODerate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) time series data products offer a potential means for state agencies to monitor hurricane-induced forest damage and recovery across a broad region. In response, a project was conducted to produce and assess 250-meter forest disturbance and recovery maps for areas in southern Mississippi impacted by Hurricane Katrina. The products and capabilities from the project were compiled to aid the work of the Mississippi Institute for Forest Inventory (MIFI). A series of NDVI change detection products were computed to assess hurricane-induced damage and recovery. Hurricane-induced forest damage maps were derived by computing percent change between MODIS MOD13 16-day composited NDVI pre-hurricane "baseline" products (2003 and 2004) and post-hurricane NDVI products (2005). Recovery products were then computed in which post-storm 2006, 2007, 2008, and 2009 NDVI data were each individually compared to the historical baseline NDVI. All percent NDVI change computations considered the 16-day composite period of August 29 to September 13 for each year in the study. This provided percent change in the maximum NDVI for the two-week period just after the hurricane event and for each subsequent anniversary through 2009, resulting in forest disturbance products for 2005 and recovery products for the following four years. These disturbance and recovery products were produced for MIFI's Southeast Inventory District and also for the entire hurricane impact zone. MIFI forest inventory products were used as ground truth information for the project. Each NDVI percent change product was classified into six categories of forest disturbance intensity. Stand age and stand type raster data, also provided by MIFI, were used along with the forest disturbance/recovery products to create forest damage stratification products integrating three stand type classes, six stand age classes, and six forest disturbance intensity classes. This stratification product will be used to aid MIFI timber inventory planning and to prepare for damage assessments after future hurricane events. Validation was performed by comparing the MODIS percent NDVI change products to equivalent products derived from Landsat data for the same period and MIFI inventory district area.
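
    The core damage-mapping step reduces to raster arithmetic: percent change of post-storm NDVI against the pre-hurricane baseline, binned into six intensity classes. A sketch in Python/NumPy, with invented pixel values and assumed class breaks (the abstract does not give the actual thresholds):

        # Sketch of the percent-NDVI-change and disturbance-classification
        # step. The rasters and the six class breaks are illustrative; the
        # abstract does not specify the actual thresholds used for MIFI.

        import numpy as np

        # 16-day maximum-value composite NDVI rasters (Aug 29 - Sep 13 window)
        baseline = np.array([[0.82, 0.79], [0.85, 0.80]])  # 2003/2004 pre-Katrina
        post     = np.array([[0.40, 0.65], [0.83, 0.20]])  # 2005 post-Katrina

        # Percent change relative to the pre-hurricane baseline
        pct_change = 100.0 * (post - baseline) / baseline

        # Bin NDVI loss into six disturbance-intensity categories
        # (bin edges are assumed, for illustration only)
        bins = [-100, -60, -40, -25, -15, -5, 0]
        disturbance_class = np.digitize(pct_change, bins)

        print(pct_change)
        print(disturbance_class)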

    Generating adaptive hypertext content from the semantic web

    Accessing and extracting knowledge from online documents is crucial for the realisation of the Semantic Web and the provision of advanced knowledge services. The Artequakt project is an ongoing investigation tackling these issues to facilitate the creation of tailored biographies from information harvested from the web. In this paper we will present the methods we currently use to model, consolidate and store knowledge extracted from the web so that it can be re-purposed as adaptive content. We look at how Semantic Web technology could be used within this process and also how such techniques might be used to provide content to be published via the Semantic Web.

    Using Protege for automatic ontology instantiation

    This paper gives an overview of the use of Protégé in the Artequakt system, which integrated Protégé with a set of natural language tools to automatically extract knowledge about artists from web documents and instantiate a given ontology. Protégé was also linked to structured templates that generate documents from the knowledge fragments it maintains.
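
    For illustration only, ontology instantiation of one extracted fact might look like the following, sketched here with rdflib rather than the Protégé knowledge model the system actually used; the ontology IRI and property names are invented:

        # Minimal sketch of ontology instantiation: turning an extracted fact
        # into a typed instance in a knowledge base. Uses rdflib for brevity;
        # the actual system worked through Protégé instead.

        from rdflib import Graph, Literal, Namespace, RDF

        ART = Namespace("http://example.org/artequakt#")  # hypothetical IRI

        g = Graph()
        g.bind("art", ART)

        # Fact extracted from a web document: "Monet was born in Paris in 1840."
        artist = ART["Claude_Monet"]
        g.add((artist, RDF.type, ART.Artist))   # instantiate the Artist class
        g.add((artist, ART.placeOfBirth, Literal("Paris")))
        g.add((artist, ART.yearOfBirth, Literal(1840)))

        print(g.serialize(format="turtle"))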

    Web based knowledge extraction and consolidation for automatic ontology instantiation

    The Web is probably the largest and richest information repository available today. Search engines are the common access routes to this valuable source. However, the role of these search engines is often limited to the retrieval of lists of potentially relevant documents. The burden of analysing the returned documents and identifying the knowledge of interest is therefore left to the user. The Artequakt system aims to deploy natural language tools to automatically extract and consolidate knowledge from web documents and instantiate a given ontology, which dictates the type and form of knowledge to extract. Artequakt focuses on the domain of artists, and uses the harvested knowledge to generate tailored biographies. This paper describes the latest developments of the system and discusses the problem of knowledge consolidation.
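
    Consolidation, deciding that facts harvested from different pages describe the same artist and merging them, can be sketched as below; the exact-name normalisation rule is a deliberate simplification, not Artequakt's actual matching logic:

        # Toy sketch of knowledge consolidation: merging facts about the same
        # artist gathered from different documents. Real consolidation needs
        # far richer entity matching than the name-normalisation rule here.

        from collections import defaultdict

        # (entity name as found in the document, attribute, value)
        harvested = [
            ("Claude Monet", "born", "1840"),
            ("Monet, Claude", "born", "1840"),
            ("Claude Monet", "movement", "Impressionism"),
        ]

        def canonical(name: str) -> str:
            """Normalise 'Surname, Forename' and 'Forename Surname' to one key."""
            if "," in name:
                surname, forename = [p.strip() for p in name.split(",", 1)]
                name = f"{forename} {surname}"
            return name.lower()

        consolidated: dict[str, dict[str, set[str]]] = \
            defaultdict(lambda: defaultdict(set))
        for name, attr, value in harvested:
            consolidated[canonical(name)][attr].add(value)

        # Conflicting values for the same attribute survive as alternatives
        # that a downstream biography generator can choose between.
        for entity, attrs in consolidated.items():
            print(entity, dict(attrs))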

    Identification of Small Molecule Inhibitors of Clostridium perfringens ε-Toxin Cytotoxicity Using a Cell-Based High-Throughput Screen

    The Clostridium perfringens epsilon toxin, a select agent, is responsible for a severe, often fatal enterotoxemia characterized by edema in the heart, lungs, kidney, and brain. The toxin is believed to be an oligomeric pore-forming toxin. Currently, there is no effective therapy for countering the cytotoxic activity of the toxin in exposed individuals. Using a robust cell-based high-throughput screening (HTS) assay, we screened a 151,616-compound library for the ability to inhibit ε-toxin-induced cytotoxicity. Survival of MDCK cells exposed to the toxin was assessed by addition of resazurin to detect metabolic activity in surviving cells. The hit rate for this screen was 0.6%. Following a secondary screen of each hit in triplicate and assays to eliminate false positives, we focused on three structurally distinct compounds: an N-cycloalkylbenzamide, a furo[2,3-b]quinoline, and a 6H-anthra[1,9-cd]isoxazol. None of the three compounds appeared to inhibit toxin binding to cells or the ability of the toxin to form oligomeric complexes. Additional assays demonstrated that two of the inhibitory compounds inhibited ε-toxin-induced permeabilization of MDCK cells to propidium iodide. Furthermore, the two compounds exhibited inhibitory effects on cells pre-treated with toxin. Structural analogs of one of the inhibitors identified through the high-throughput screen were analyzed and provided initial structure-activity data. These compounds should serve as the basis for further structure-activity refinement that may lead to the development of effective anti-ε-toxin therapeutics.
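
    The screen's readout comes down to normalisation arithmetic: scale each well's resazurin fluorescence between the toxin-only and no-toxin controls, then flag compounds that rescue viability above a cutoff. A toy sketch (all signals, control values, and the 40% threshold are invented, not the paper's actual constants):

        # Sketch of hit calling in a cell-based rescue screen. Signals,
        # control values and the 40% viability cutoff are invented for
        # illustration only.

        import numpy as np

        toxin_only_ctrl = 1200.0   # resazurin fluorescence, toxin + no compound
        no_toxin_ctrl   = 9800.0   # fluorescence of healthy, untreated cells

        compound_signal = np.array([1300.0, 7900.0, 2100.0, 6400.0])

        # Percent of cells rescued, normalised between the two controls
        viability = 100.0 * (compound_signal - toxin_only_ctrl) \
                    / (no_toxin_ctrl - toxin_only_ctrl)

        hits = viability >= 40.0   # assumed hit threshold
        print(viability.round(1))                     # [ 1.2 77.9 10.5 60.5]
        print(f"hit rate: {100 * hits.mean():.1f}%")  # 50.0% in this toy set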